A decomposition of a Hilbert space into a quasi-orthogonal family of closed subspaces is introduced. We investigate conditions under which bounded families of corresponding quasi-projectors, or resolutions of the identity operator, can be derived. Given a local family of atoms, or generalized stable basis, for each subspace, we show that the union of the local atoms can generate a global frame for the Hilbert space. Corresponding duals can be calculated in a flexible way by means of systems of quasi-projectors. An application to Gabor frames is presented as an example of this technique, for the calculation of duals and explicit estimates of lattice constants.
A rigorous decomposition approach to solve separable mixed-integer nonlinear programs (MINLPs) whose participating functions are nonconvex is presented. The proposed algorithms consist of solving an alternating sequence of Relaxed Master Problems (mixed-integer linear programs) and two nonlinear programming problems (NLPs). The algorithms generate a sequence of valid nondecreasing lower bounds and upper bounds and converge in a finite number of iterations. A Primal Bounding Problem is introduced: a convex NLP solved at each iteration to derive valid outer approximations of the nonconvex functions in the continuous space. Two decomposition algorithms are presented in this work. On finite termination, the first yields the global solution to the original nonconvex MINLP and the second finds a rigorous bound on the global solution. Convergence and optimality properties, and refinements of the algorithms for efficient implementation, are presented. Finally, numerical results for example problems are compared with those of currently available algorithms, illustrating the potential benefits of the proposed algorithms.
In this paper we discuss the variational inequality problem VIP(X, F), where F is assumed to be a strongly monotone mapping from $\mathbb{R}^n$ to $\mathbb{R}^n$ and the feasible set $X = [l, u]$ has the form of box constraints. Based on the Chen-Harker-Kanzow smoothing functions, we first present an explicit continuation algorithm (ECA) for solving VIP(X, F). The ECA has the following main features: (a) at each iteration, it yields a new iterate by solving a system of equations in $\mathbb{R}^{n+s}$ with a parameter and a nonsingular Jacobian matrix, where $s = |\{j : -\infty < l_j < u_j < +\infty\}|$; (b) it generates a sequence of iterates in the interior of the feasible set X. Secondly, we give an implicit continuation algorithm (ICA) for solving VIP(X, F); the key feature of the ICA is that it solves only one system of nonlinear equations, rather than a series of them, to obtain a solution of VIP(X, F). Both algorithms are shown to possess strong global convergence. Finally, some preliminary numerical results for the two algorithms are reported.
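As a small illustration of the smoothing idea (my own sketch, not the authors' code), one common form of the Chen-Harker-Kanzow smoothing function replaces the nonsmooth $\min(a,b)$ by a function that is smooth for $\mu > 0$ and recovers $\min(a,b)$ as $\mu \to 0$:

```python
import math

def chks(a: float, b: float, mu: float) -> float:
    """Chen-Harker-Kanzow smoothing of min(a, b).

    Smooth in (a, b) for mu > 0; converges to min(a, b) as mu -> 0.
    """
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

# As mu shrinks, chks(a, b, mu) approaches min(a, b) = 1 here.
for mu in (1.0, 0.1, 1e-4):
    print(mu, chks(1.0, 3.0, mu))
```

Embedding such a function in the equations of the VIP turns the nonsmooth system into a parameterized smooth one, which is what makes a Newton-type continuation step with a nonsingular Jacobian possible.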
We study the approximation problem (or problem of optimal recovery in the $L_2$-norm) for weighted Korobov spaces with smoothness parameter $\alpha$. The weights $\gamma_j$ of the Korobov spaces moderate the behavior of periodic functions with respect to successive variables. The nonnegative smoothness parameter $\alpha$ measures the decay of Fourier coefficients. For $\alpha=0$, the Korobov space is the $L_2$ space, whereas for positive $\alpha$, the Korobov space is a space of periodic functions with some smoothness and the approximation problem corresponds to a compact operator. The periodic functions are defined on $[0,1]^d$ and our main interest is when the dimension $d$ varies and may be large. We consider algorithms using two different classes of information. The first class, $\Lambda^{\mathrm{all}}$, consists of arbitrary linear functionals. The second class, $\Lambda^{\mathrm{std}}$, consists only of function values; this class is more realistic in practical computations.

We want to know when the approximation problem is tractable. Tractability means that there exists an algorithm whose error is at most $\varepsilon$ and whose information cost is bounded by a polynomial in the dimension $d$ and in $\varepsilon^{-1}$. Strong tractability means that the bound does not depend on $d$ and is polynomial in $\varepsilon^{-1}$. In this paper we consider the worst case, randomized, and quantum settings. In each setting, the concepts of error and cost are defined differently and, therefore, tractability and strong tractability depend on the setting and on the class of information.

In the worst case setting, we apply known results to prove that strong tractability and tractability in the class $\Lambda^{\mathrm{all}}$ are equivalent. This holds if and only if $\alpha>0$ and the sum-exponent $s_{\gamma}$ of the weights is finite, where
$$
s_{\gamma}= \inf\Bigl\{s>0 : \sum_{j=1}^\infty\gamma_j^s<\infty\Bigr\}.
$$
In the worst case setting for the class $\Lambda^{\mathrm{std}}$ we must assume that $\alpha>1$ to guarantee that functionals from $\Lambda^{\mathrm{std}}$ are continuous. The notions of strong tractability and tractability are then not equivalent. In particular, strong tractability holds if and only if $\alpha>1$ and $\sum_{j=1}^\infty\gamma_j<\infty$.

In the randomized setting, it is known that randomization does not help over the worst case setting in the class $\Lambda^{\mathrm{all}}$. For the class $\Lambda^{\mathrm{std}}$, we prove that strong tractability and tractability are equivalent, and this holds under the same assumption as for the class $\Lambda^{\mathrm{all}}$ in the worst case setting, that is, if and only if $\alpha>0$ and $s_{\gamma} < \infty$.

In the quantum setting, we consider only upper bounds for the class $\Lambda^{\mathrm{std}}$ with $\alpha>1$. We prove that $s_{\gamma}<\infty$ implies strong tractability. Hence for $s_{\gamma}>1$, the randomized and quantum settings both break the worst case intractability of approximation for the class $\Lambda^{\mathrm{std}}$.

We indicate cost bounds for algorithms with error at most $\varepsilon$. Let $c(d)$ denote the cost of computing $L(f)$ for $L\in \Lambda^{\mathrm{all}}$ or $L\in \Lambda^{\mathrm{std}}$, and let the cost of one arithmetic operation be taken as unity. The information cost bound in the worst case setting for the class $\Lambda^{\mathrm{all}}$ is of order $c(d)\cdot\varepsilon^{-p}$ with $p$ roughly equal to $2\max(s_{\gamma},\alpha^{-1})$. For the class $\Lambda^{\mathrm{std}}$ in the randomized setting, we present an algorithm with error at most $\varepsilon$ whose total cost is of order $c(d)\,\varepsilon^{-p-2} + d\,\varepsilon^{-2p-2}$, which for small $\varepsilon$ is roughly
$$
d\,\varepsilon^{-2p-2}.
$$
In the quantum setting, we present a quantum algorithm with error at most $\varepsilon$ that uses only about $d + \log \varepsilon^{-1}$ qubits and whose total cost is of order
$$
(c(d) + d)\,\varepsilon^{-1-3p/2}.
$$
The ratio of the costs of the algorithms in the randomized and quantum settings is of order
$$
\frac{d}{c(d)+d}\,\left(\frac{1}{\varepsilon}\right)^{1+p/2}.
$$
Hence, we have a polynomial speedup of order $\varepsilon^{-(1+p/2)}$. We stress that $p$ can be arbitrarily large, in which case the speedup is huge.
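The sum-exponent can be illustrated numerically (the weights $\gamma_j = j^{-2}$ below are an invented example, not taken from the paper): since $\sum_j \gamma_j^s = \sum_j j^{-2s}$ converges exactly when $s > 1/2$, these weights have sum-exponent $s_\gamma = 1/2$:

```python
def partial_sum(s: float, n: int = 100000) -> float:
    """Partial sum of sum_j gamma_j^s for the example weights gamma_j = j**-2."""
    return sum(j ** (-2.0 * s) for j in range(1, n + 1))

# For s > 1/2 the series converges; for s <= 1/2 it diverges,
# so the sum-exponent of these weights is s_gamma = 1/2.
print(partial_sum(1.0))  # converges toward pi**2 / 6 ~ 1.6449
print(partial_sum(0.5))  # harmonic series: grows like log(n)
```

With $s_\gamma = 1/2$ finite, these weights would fall in the tractable regime of the worst case setting for arbitrary linear functionals whenever the smoothness parameter is positive.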
In this paper, a new hybrid genetic algorithm solving the DNA sequencing problem with negative and positive errors is presented. The algorithm takes as input a set of oligonucleotides coming from a hybridization experiment. The aim is to reconstruct the original DNA sequence of known length on the basis of this set. No additional information about the oligonucleotides or about the errors is assumed. Despite this, the algorithm returns surprisingly good results for computationally hard instances, with very high similarity to the original sequences.
For each prime $p$, let $p\#$ be the product of the primes less than or equal to $p$. We have greatly extended the range for which the primality of $p\#+1$ and $p\#-1$ is known, and have found two new primes of the first form and one of the second. We supply heuristic estimates of the expected number of such primes and compare these estimates to the number actually found.
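As a toy illustration (my own sketch; the record computations behind results like these rely on specialized primality proofs, not trial division), the primorial $p\#$ and the primality of $p\# \pm 1$ can be checked directly for small $p$:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def primorial(p: int) -> int:
    """Product of all primes <= p (the 'primorial' p#)."""
    result = 1
    for q in range(2, p + 1):
        if is_prime(q):
            result *= q
    return result

# 11# + 1 = 2311 is prime, while 13# + 1 = 30031 = 59 * 509 is not.
print(primorial(11) + 1, is_prime(primorial(11) + 1))
print(primorial(13) + 1, is_prime(primorial(13) + 1))
```

Since $p\#$ is divisible by every prime up to $p$, any prime factor of $p\# \pm 1$ must exceed $p$, which is what makes these numbers natural candidates in searches for large primes.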
We study the complexity of approximating stochastic integrals with error at most $\varepsilon$ for various classes of functions. For Itô integration, we show that the complexity is of order $\varepsilon^{-1}$, even for classes of very smooth functions. The lower bound is obtained by showing that Itô integration is not easier than Lebesgue integration in the average case setting with the Wiener measure. The upper bound is obtained by the Milstein algorithm, which is almost optimal in the considered classes of functions. The Milstein algorithm uses the values of the Brownian motion and of the integrand. It is bilinear in these values and is very easy to implement. For Stratonovich integration, we show that the complexity depends on the smoothness of the integrand and may be much smaller than the complexity of Itô integration.
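A hedged sketch of the idea (an illustration under my own assumptions, not the paper's exact algorithm): a Milstein-type approximation of $\int_0^T f(W_t)\,dW_t$ sums $f(W_{t_i})\,\Delta W_i$ and adds the correction $\tfrac12 f'(W_{t_i})\bigl((\Delta W_i)^2 - \Delta t\bigr)$. For the linear integrand $f(w) = w$, Itô's formula gives the exact value $(W_T^2 - T)/2$, which the discrete sum reproduces up to rounding:

```python
import random

def milstein_ito_integral(w, dt):
    """Milstein-type approximation of int_0^T W_t dW_t from a sampled Brownian path.

    w  -- list of Brownian motion values W_{t_0}, ..., W_{t_n}
    dt -- time step (here f(w) = w, so f'(w) = 1)
    """
    total = 0.0
    for i in range(len(w) - 1):
        dw = w[i + 1] - w[i]
        total += w[i] * dw + 0.5 * (dw * dw - dt)  # Ito term + Milstein correction
    return total

# Simulate one Brownian path on [0, T] with n steps.
random.seed(0)
n, T = 1000, 1.0
dt = T / n
w = [0.0]
for _ in range(n):
    w.append(w[-1] + random.gauss(0.0, dt ** 0.5))

approx = milstein_ito_integral(w, dt)
exact = 0.5 * (w[-1] ** 2 - T)  # Ito's formula for int W dW
print(approx, exact)
```

For this particular integrand the corrected sum telescopes to the exact Itô value; for general integrands the scheme is only approximate, with the error behavior discussed in the abstract.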
A theoretical conformational study using the CICADA program package (J. Mol. Struct. (Theochem), 337 (1995) 17) was performed for two linear enkephalins, Leu-enkephalin and Met-enkephalin, and two cyclic analogues, DLFE and DPDPE. The conformational flexibilities of whole molecules and selected torsions were calculated.
The low-energy conformers obtained were compared with structures determined by spectroscopic methods, and the mutual spatial positions of the key elements for receptor recognition were analyzed. Conformations were clustered using the RMS deviation computed for selected atoms. Whereas the aromatic rings of the cyclic analogues exhibit distinct conformational behavior, the linear enkephalins show similar behavior in these key parts.
Hydrogen bonds predicted by spectroscopic measurements were confirmed by our calculations. Very specific conformational features, such as concerted conformational movement, were also analyzed.
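A minimal sketch of the clustering metric (the coordinates below are invented toy data; CICADA's actual atom selection and superposition procedure are not shown): the RMS deviation between two conformers over a matched set of selected atoms:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two matched lists of 3D coordinates.

    Assumes the conformers are already superimposed; no alignment is done here.
    """
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Two toy "conformers": every atom shifted by 1 along x gives RMSD = 1.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
b = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(rmsd(a, b))  # -> 1.0
```

Conformers whose pairwise RMSD over the selected atoms falls below a chosen threshold would then be assigned to the same cluster.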